205 research outputs found
Exploiting Domain-Specific Structures For End-User Programming Support Tools
In previous work we have tried to transfer ideas that have been successful in
general-purpose programming languages and mainstream software engineering into
the realm of spreadsheets, which is one important example of an end-user
programming environment.
More specifically, we have addressed the questions of how to employ the
concepts of type checking, program generation and maintenance, and testing in
spreadsheets. While the primary objective of our work has been to offer
improvements for end-user productivity, we have tried to follow two
particular principles to guide our research.
(1) Keep the number of new concepts to be learned by end users at a minimum.
(2) Exploit as much as possible the information offered by the internal
structure of spreadsheets.
In this short paper we will illustrate our research approach with several examples.
A Calculus for Variational Programming
Variation is ubiquitous in software. Many applications can benefit from making this variation explicit, then manipulating and computing with it directly---a technique we call "variational programming". This idea has been independently discovered in several application domains, such as efficiently analyzing and verifying software product lines, combining bounded and symbolic model-checking, and computing with alternative privacy profiles. Although these
domains share similar core problems, and there are also many similarities in the solutions, there is no dedicated programming language support for variational programming. This makes the various implementations tedious, prone to errors, hard to maintain and reuse, and difficult to compare.
In this paper we present a calculus that forms the basis of a programming language with explicit support for representing, manipulating, and computing with variation in programs and data. We illustrate how such a language can simplify the implementation of variational programming tasks. We present the syntax and semantics of the core calculus, a sound type system, and a type inference algorithm that produces principal types.
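As a toy illustration of the core idea, the following Python sketch models a "choice" between two alternatives and the selection operation that eliminates a dimension of variation. The names (`Chc`, `select`) and representation are illustrative only, not the paper's actual calculus.

```python
# A minimal sketch of variational values: a Chc names a dimension of
# variation; select eliminates that dimension by picking one side.
from dataclasses import dataclass
from typing import Union

@dataclass
class Chc:
    """A named choice between two alternatives (which may nest)."""
    dim: str
    left: "Expr"
    right: "Expr"

Expr = Union[int, str, Chc, tuple]

def select(e: Expr, dim: str, side: str) -> Expr:
    """Eliminate dimension `dim` everywhere by picking 'L' or 'R'."""
    if isinstance(e, Chc):
        if e.dim == dim:
            return select(e.left if side == "L" else e.right, dim, side)
        return Chc(e.dim, select(e.left, dim, side), select(e.right, dim, side))
    if isinstance(e, tuple):
        return tuple(select(x, dim, side) for x in e)
    return e

# A variational pair: the first component varies in dimension "A",
# the second is shared by all variants.
v = (Chc("A", 1, 2), "shared")
assert select(v, "A", "L") == (1, "shared")
assert select(v, "A", "R") == (2, "shared")
```

The point of a dedicated calculus is that operations like `select` are defined once, uniformly, instead of being re-implemented ad hoc in each application domain.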
Automatically inferring ClassSheet models from spreadsheets
Many errors in spreadsheet formulas can be avoided if spreadsheets are built automatically from higher-level models that can encode and enforce consistency constraints.
However, designing such models is time consuming and requires expertise beyond the knowledge needed to work with spreadsheets. Legacy spreadsheets pose a particular challenge to the approach of controlling spreadsheet evolution through higher-level models, because the need for a model might be overshadowed by two problems: (A) the benefit of creating a model is diminished since the legacy spreadsheet already exists, and (B) existing data must be transferred into the new model-generated spreadsheet.

To address these problems and to support the model-driven spreadsheet engineering approach, we have developed a tool that can automatically infer ClassSheet models from spreadsheets. To this end, we have adapted a method for inferring entity/relationship models from relational databases to the spreadsheet/ClassSheet realm. We have implemented our techniques in the HAEXCEL framework and integrated it with the ViTSL/Gencel spreadsheet generator, which allows the automatic generation of refactored spreadsheets from the inferred ClassSheet model. The resulting spreadsheet guides further changes and provably safeguards the spreadsheet against a large class of formula errors. The developed tool is a significant contribution to spreadsheet (reverse) engineering, because it fills an important gap and allows a promising design method (ClassSheets) to be applied to a huge collection of legacy spreadsheets with minimal effort.
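To give a feel for why model-generated spreadsheets rule out a class of formula errors, here is a deliberately tiny Python sketch of the model-driven idea: a model fixes the columns and a derived-column formula, and every generated row computes the formula itself, so a user cannot hand-enter an inconsistent total. All names here are made up for illustration; this is not the actual ClassSheet/Gencel API.

```python
# A toy "model" for a spreadsheet: fixed columns plus a derived column
# whose formula is enforced on every generated row.
MODEL = {
    "columns": ["item", "qty", "price", "total"],
    "derived": {"total": lambda row: row["qty"] * row["price"]},
}

def make_row(model, **cells):
    """Build a row from the model, computing derived cells."""
    row = {c: cells.get(c) for c in model["columns"]}
    for col, formula in model["derived"].items():
        row[col] = formula(row)  # enforced by the model, never user-supplied
    return row

row = make_row(MODEL, item="widget", qty=3, price=2)
assert row["total"] == 6
```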
Non-linear Pattern Matching with Backtracking for Non-free Data Types
Non-free data types are data types whose data have no canonical forms. For
example, multisets are non-free data types because a multiset such as
{a, b, b} has other equivalent but literally different forms, such as
{b, a, b} and {b, b, a}.
Pattern matching is known to provide a handy tool set for treating such data types.
Although many studies on pattern matching, and implementations in practical
programming languages, have been proposed so far, we observe that none of them
satisfies all the criteria of practical pattern matching, which are as
follows: i) efficiency of the backtracking algorithm for non-linear patterns,
ii) extensibility of the matching process, and iii) polymorphism in patterns.
This paper aims to design a new pattern-matching-oriented programming
language that satisfies all the above three criteria. The proposed language
features clean Scheme-like syntax and efficient and extensible pattern matching
semantics. This programming language is especially useful for the processing of
complex non-free data types that not only include multisets and sets but also
graphs and symbolic mathematical expressions. We discuss the importance of our
criteria of practical pattern matching and how our language design naturally
arises from the criteria. The proposed language has already been implemented
and open-sourced as the Egison programming language.
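To illustrate what non-linear matching with backtracking over a multiset means, here is a small Python sketch in the spirit of (but not the syntax of) Egison. The pattern in question is (x, x, _): find two equal elements. Because "taking one element" from a multiset is nondeterministic, the matcher must backtrack over all ways to split the collection.

```python
# Backtracking matcher for the non-linear pattern (x, x, _) over a
# multiset represented as a list. Illustrative sketch only.

def cons_splits(ms):
    """All ways to pick one element out of a multiset (as a list)."""
    for i in range(len(ms)):
        yield ms[i], ms[:i] + ms[i + 1:]

def match_two_equal(ms):
    """Find a binding for (x, x, _): two equal elements, or None."""
    for x, rest in cons_splits(ms):        # bind x to some element
        for y, _ in cons_splits(rest):     # try a second element
            if y == x:                     # non-linear check: same x
                return {"x": x}
    return None                            # search exhausted: no match

assert match_two_equal([1, 2, 3, 2]) == {"x": 2}
assert match_two_equal([1, 2, 3]) is None
```

The efficiency criterion (i) above concerns exactly this search: a naive matcher re-explores splits, while an efficient one prunes the backtracking.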
A Visual Language for Explaining Probabilistic Reasoning
We present an explanation-oriented, domain-specific, visual language for explaining probabilistic reasoning. Explanation-oriented programming is a new paradigm that shifts the focus of programming from the computation of results to explanations of how those results were computed. Programs in this language therefore describe explanations of probabilistic reasoning problems. The language relies on a story-telling metaphor of explanation, where the reader is guided through a series of well-understood steps from some initial state to the final result. Programs can also be manipulated according to a set of laws to automatically generate equivalent explanations from one explanation instance. This increases the explanatory value of the language by allowing readers to cheaply derive alternative explanations if they do not understand the first. The language comprises two parts: a formal textual notation for specifying explanation-producing programs and the more elaborate visual notation for presenting those explanations. We formally define the abstract syntax of explanations and define the semantics of the textual notation in terms of the explanations that are produced.

This is an author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by Elsevier and can be found at: http://www.journals.elsevier.com/journal-of-visual-languages-and-computing/.

Keywords: Story telling, Explanation, Language semantics, Probability, Explanation transformation
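The story-telling metaphor can be caricatured in a few lines of Python: an explanation is a sequence of small, named steps, each transforming a probability distribution, and the "story" shows the distribution after every step. The names and representation are mine, not the paper's notation.

```python
# An explanation as a story: named steps that each transform a
# probability distribution (a dict from outcome to probability).

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

def step(title, f):
    """Pair a human-readable step title with a distribution transform."""
    return (title, f)

def tell(dist, story):
    """Run the story, showing the distribution after each step."""
    for title, f in story:
        dist = f(dist)
        print(f"{title}: {dist}")
    return dist

# Explain P(at least one heads in two fair coin flips) step by step.
story = [
    step("Start with one fair coin",
         lambda _: {"H": 0.5, "T": 0.5}),
    step("Flip a second coin",
         lambda d: normalize({a + b: p * 0.5
                              for a, p in d.items() for b in "HT"})),
    step("Group by whether any heads appeared",
         lambda d: {"some H": sum(p for k, p in d.items() if "H" in k),
                    "no H": d.get("TT", 0.0)}),
]
final = tell({}, story)
assert abs(final["some H"] - 0.75) < 1e-9
```

Reordering or merging steps while preserving the final distribution is the kind of law-based manipulation the abstract describes: the same result, explained differently.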
Systematic identification and communication of type errors
When type inference fails, it is often difficult to pinpoint the cause of the type error among many potential candidates. Generating informative messages to remove the type error is another difficult task due to the limited availability of type information. Over the last three decades many approaches have been developed to help debug type errors. However, most of these methods suffer from one or more of the following problems: (1) being incomplete, they miss the real cause; (2) they cover many potential causes without distinguishing them; (3) they provide little or no information for how to remove the type error. Any one of these problems can turn the type-error debugging process into a tedious and ineffective endeavor. To address this issue, we have developed a method named counter-factual typing, which (1) finds a comprehensive set of error causes in AST leaves, (2) computes an informative message on how to get rid of the type error for each error cause, and (3) ranks all messages and iteratively presents the message for the most likely error cause. The biggest technical challenge is the efficient generation of all error messages, which seems to be exponential in the size of the expression. We address this challenge by employing the idea of variational typing, which systematically reuses computations for shared parts and generates all messages by typing the whole ill-typed expression only once. We have evaluated our approach over a large set of examples collected from previous publications in the literature. The evaluation shows that our approach outperforms previous approaches and is computationally feasible.
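The counter-factual question can be demonstrated with a brute-force Python sketch over a toy expression language: for each leaf, ask "would changing this leaf alone make the whole expression type-check?". The paper's contribution is doing this efficiently via variational typing; this naive version re-types the expression once per leaf and is purely illustrative.

```python
# Brute-force counter-factual error-cause search over a toy language:
# expressions are int/str literals or ('+', a, b), where '+' needs ints.

def check(e):
    """Type an expression; return its type, or None on a type error."""
    if isinstance(e, int):
        return "Int"
    if isinstance(e, str):
        return "Str"
    op, a, b = e
    return "Int" if check(a) == "Int" and check(b) == "Int" else None

def leaves(e, path=()):
    """Yield (path, leaf) for every leaf of the expression tree."""
    if isinstance(e, (int, str)):
        yield path, e
    else:
        for i, sub in enumerate(e[1:], start=1):
            yield from leaves(sub, path + (i,))

def replace(e, path, v):
    """Return e with the leaf at `path` replaced by v."""
    if not path:
        return v
    return tuple(replace(sub, path[1:], v) if j == path[0] else sub
                 for j, sub in enumerate(e))

def error_causes(e):
    """Leaves that, changed in isolation (here: to 0), fix the error."""
    return [leaf for path, leaf in leaves(e)
            if check(replace(e, path, 0)) is not None]

# ('+', 1, "two") is ill-typed; the string leaf is the lone cause.
assert check(("+", 1, "two")) is None
assert error_causes(("+", 1, "two")) == ["two"]
```

The naive loop types the expression once per leaf; variational typing instead types a single expression containing choices, sharing all the work on unchanged parts.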
Extending Type Inference to Variational Programs
Through the use of conditional compilation and related tools, many software projects can be used to generate a huge
number of related programs. The problem of typing such variational software is difficult. The brute-force strategy
of generating all variants and typing each one individually (1) is usually infeasible for efficiency reasons and (2)
produces results that do not map well to the underlying variational program. Recent research has focused mainly
on efficiency and addressed only the problem of type checking. In this work we tackle the more general problem of
variational type inference and introduce variational types to represent the result of typing a variational program. We
introduce the variational lambda calculus (VLC) as a formal foundation for research on typing variational programs.
We define a type system for VLC in which VLC expressions are mapped to correspondingly variational types. We
show that the type system is correct by proving that the typing of expressions is preserved over the process of
variation elimination, which eventually results in a plain lambda calculus expression and its corresponding type.
We identify a set of equivalence rules for variational types and prove that the type unification problem modulo these
equivalence rules is unitary and decidable; we also present a sound and complete unification algorithm. Based on
the unification algorithm, the variational type inference algorithm is an extension of algorithm W. We show that
it is sound and complete and computes principal types. We also consider the extension of VLC with sum types, a
necessary feature for supporting variational data types, and demonstrate that the previous theoretical results also
hold under this extension. Finally, we characterize the complexity of variational type inference and demonstrate the
efficiency gains over the brute-force strategy.

This is an author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by the Association for Computing Machinery and can be found at: http://toplas.acm.org/.

Keywords: variational type inference, variational types, variational lambda calculus
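The central correctness property, that typing commutes with variation elimination, can be rendered in a toy Python form: typing a choice of terms yields the corresponding choice of types, and selecting a variant first or last gives the same plain type. The representation below is illustrative, not VLC itself.

```python
# Toy variational typing: a term is an int/str literal or a choice
# ('Chc', dim, left, right); its type mirrors the choice structure.

def vtype(e):
    """Compute the (possibly variational) type of a toy term."""
    if isinstance(e, int):
        return "Int"
    if isinstance(e, str):
        return "Str"
    tag, d, l, r = e
    return ("Chc", d, vtype(l), vtype(r))

def select(x, d, side):
    """Eliminate dimension d in a term or a type by picking a side."""
    if isinstance(x, tuple) and x[0] == "Chc":
        tag, dim, l, r = x
        if dim == d:
            return select(l if side == "L" else r, d, side)
        return ("Chc", dim, select(l, d, side), select(r, d, side))
    return x

e = ("Chc", "A", 1, "one")  # a term varying in dimension A
# Typing then selecting equals selecting then typing:
assert select(vtype(e), "A", "L") == vtype(select(e, "A", "L")) == "Int"
assert select(vtype(e), "A", "R") == vtype(select(e, "A", "R")) == "Str"
```

This is the preservation result in miniature: one variational typing stands in for typing every variant, which is where the efficiency gains over the brute-force strategy come from.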
Model inference for spreadsheets
Many errors in spreadsheet formulas can be avoided if spreadsheets are built automatically
from higher-level models that can encode and enforce consistency constraints in the generated
spreadsheets. Employing this strategy for legacy spreadsheets is difficult, because the model has
to be reverse engineered from an existing spreadsheet and existing data must be transferred into
the new model-generated spreadsheet.

We have developed and implemented a technique that automatically infers relational schemas
from spreadsheets. This technique uses particularities of the spreadsheet realm to create better
schemas. We have evaluated this technique in two ways: First, we have demonstrated its
applicability by using it on a set of real-world spreadsheets. Second, we have run an empirical study
with users. The study has shown that the results produced by our technique are comparable to
the ones developed by experts starting from the same (legacy) spreadsheet data.
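The data-driven core of such schema inference can be sketched briefly: detect functional dependencies col → other by checking that no value of one column maps to two different values of the other. The real technique additionally exploits spreadsheet-specific cues (layout, formulas); this Python fragment shows only the tabular-data part and is an illustration, not the implemented algorithm.

```python
# Detect functional dependencies a -> b in tabular data: a determines b
# iff no value of column a co-occurs with two different values of b.

def functional_deps(header, rows):
    deps = []
    for i, a in enumerate(header):
        for j, b in enumerate(header):
            if i == j:
                continue
            seen = {}  # value of a -> first observed value of b
            if all(seen.setdefault(r[i], r[j]) == r[j] for r in rows):
                deps.append((a, b))
    return deps

header = ["id", "name", "city"]
rows = [(1, "ann", "nyc"), (2, "bob", "nyc"), (3, "ann", "sfo")]
deps = functional_deps(header, rows)
# `id` determines everything; `name` does not determine `city`.
assert ("id", "name") in deps and ("id", "city") in deps
assert ("name", "city") not in deps
```

From dependencies like these, candidate keys and relation boundaries, and hence a schema, can be proposed for expert review.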
Although relational schemas are very useful to model data, they do not fit spreadsheets well,
because they cannot express layout. Thus, we have also introduced a mapping between relational
schemas and ClassSheets. A ClassSheet controls further changes to the spreadsheet and safeguards
it against a large class of formula errors. The developed tool is a contribution to spreadsheet
(reverse) engineering, because it fills an important gap and allows a promising design method
(ClassSheets) to be applied to a huge collection of legacy spreadsheets with minimal effort.

We would like to thank Orlando Belo for his help on running and analyzing the empirical study. We would also like to thank Paulo Azevedo for his help in conducting the statistical analysis of our empirical study. We would also like to thank the anonymous reviewers for their suggestions, which helped us to improve the paper. This work is funded by ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundacao para a Ciencia e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-010048. The first author was also supported by FCT grant SFRH/BPD/73358/2010.
A DSEL for Studying and Explaining Causation
We present a domain-specific embedded language (DSEL) in Haskell that
supports the philosophical study and practical explanation of causation. The
language provides constructs for modeling situations comprised of events and
functions for reliably determining the complex causal relationships that emerge
between these events. It enables the creation of visual explanations of these
causal relationships and a means to systematically generate alternative,
related scenarios, along with corresponding outcomes and causes. The DSEL is
based on neuron diagrams, a visual notation that is well established in
practice and has been successfully employed for causation explanation and
research. In addition to its immediate applicability by users of neuron
diagrams, the DSEL is extensible, allowing causation experts to extend the
notation to introduce special-purpose causation constructs. The DSEL also
extends the notation of neuron diagrams to operate over non-boolean values,
improving its expressiveness and offering new possibilities for causation
research and its applications.

Comment: In Proceedings DSL 2011, arXiv:1109.032
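The essence of evaluating a neuron diagram can be sketched compactly: each neuron fires as a boolean function of its predecessors, evaluated in dependency order. This Python rendering is illustrative only; the DSEL itself is embedded in Haskell and also supports the non-boolean extension mentioned above.

```python
# Evaluate a neuron diagram: each neuron is a boolean function of its
# predecessor neurons, evaluated in (assumed) topological order.

def evaluate(diagram, inputs):
    """diagram: name -> (function, [predecessor names]); inputs: name -> bool."""
    fired = dict(inputs)
    for name, (f, preds) in diagram.items():
        fired[name] = f(*(fired[p] for p in preds))
    return fired

# A two-cause example: either assassin's shot suffices for the effect.
diagram = {
    "victim_dies": (lambda a, b: a or b, ["shoots_A", "shoots_B"]),
}
assert evaluate(diagram, {"shoots_A": True, "shoots_B": False})["victim_dies"]
assert not evaluate(diagram, {"shoots_A": False, "shoots_B": False})["victim_dies"]
```

Systematically toggling the inputs and re-evaluating, as above, is how the DSEL generates the alternative scenarios and outcomes used in causal explanation.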